Adversarial Margin Maximization Networks

Authors

Abstract

The tremendous recent success of deep neural networks (DNNs) has sparked a surge of interest in understanding their predictive ability. Unlike the human visual system, which is able to generalize robustly and learn with little supervision, DNNs normally require a massive amount of data to learn new concepts. In addition, research works also show that DNNs are vulnerable to adversarial examples: maliciously generated images that seem perceptually similar to natural ones but are actually formed to fool the learning models, which means the models have problems generalizing to unseen data with certain types of distortions. In this paper, we analyze the generalization ability of DNNs comprehensively and attempt to improve it from a geometric point of view. We propose adversarial margin maximization (AMM), a learning-based regularization which exploits an adversarial perturbation as a proxy. It encourages a large margin in the input space, just like support vector machines. With a differentiable formulation of the perturbation, we train the regularized DNNs simply through back-propagation in an end-to-end manner. Experimental results on various datasets (including MNIST, CIFAR-10/100, SVHN and ImageNet) and different DNN architectures demonstrate the superiority of our method over previous state-of-the-arts. Code for reproducing our results will be made publicly available.
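The geometric idea behind the abstract can be illustrated in the linear case, where the input-space margin has a closed form. The sketch below is a minimal NumPy illustration, not the paper's method: for a linear classifier the smallest (DeepFool-style) adversarial perturbation reaches the decision boundary exactly, so its norm equals the margin; AMM extends this idea to DNNs by using a differentiable adversarial perturbation as a proxy for that distance. All function names here are hypothetical.

```python
import numpy as np

def input_space_margin(w, b, x):
    """Exact distance from x to the hyperplane w.x + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

def adversarial_proxy(w, b, x):
    """Norm of the minimal L2 perturbation that moves x onto the
    boundary (closed form in the linear case); it equals the margin."""
    f = np.dot(w, x) + b
    delta = -f * w / np.dot(w, w)  # minimal perturbation direction
    return np.linalg.norm(delta)

w = np.array([3.0, 4.0])
b = -1.0
x = np.array([2.0, 1.0])

margin = input_space_margin(w, b, x)  # |3*2 + 4*1 - 1| / 5 = 1.8
proxy = adversarial_proxy(w, b, x)
print(margin, proxy)  # both 1.8
```

For a DNN no such closed form exists, which is why the paper replaces the exact distance with a differentiable adversarial perturbation and maximizes its norm as a regularizer during back-propagation.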


Similar Articles

MAGAN: Margin Adaptation for Generative Adversarial Networks

We propose a novel training procedure for Generative Adversarial Networks (GANs) to improve stability and performance by using an adaptive hinge loss objective function. We estimate the appropriate hinge loss margin with the expected energy of the target distribution, and derive both a principled criterion for updating the margin and an approximate convergence measure. The resulting training pr...


Adversarial Active Learning for Deep Networks: a Margin Based Approach

We propose a new active learning strategy designed for deep neural networks. The goal is to minimize the number of data annotations queried from an oracle during training. Previous active learning strategies scalable for deep networks were mostly based on uncertain sample selection. In this work, we focus on examples lying close to the decision boundary. Based on theoretical works on margin theo...


Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking

Traditional generative adversarial networks (GAN) and many of its variants are trained by minimizing the KL or JS-divergence loss that measures how close the generated data distribution is from the true data distribution. A recent advance called the WGAN based on Wasserstein distance can improve on the KL and JS-divergence based GANs, and alleviate the gradient vanishing, instability, and mode ...


Network Utility Maximization in Adversarial Environments

Stochastic models have been dominant in network optimization theory for over two decades, due to their analytical tractability. However, these models fail to capture non-stationary or even adversarial network dynamics which are of increasing importance for modeling the behavior of networks under malicious attacks or characterizing short-term transient behavior. In this paper, we consider the ne...


Orthogonal Margin Maximization Projection for Gait Recognition

An efficient supervised orthogonal nonlinear dimensionality reduction algorithm, namely orthogonal margin maximization projection (OMMP), is presented for gait recognition in this paper. Taking the local neighborhood geometry structure and class information into account, the proposed algorithm aims to find a projecting matrix by maximizing the local neighborhood margin between the different cla...



Journal

Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence

Year: 2021

ISSN: 1939-3539, 2160-9292, 0162-8828

DOI: https://doi.org/10.1109/tpami.2019.2948348